# Rotary position embedding
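All of the models in this section share the same positional scheme: instead of adding a position vector to each token embedding, RoPE rotates each pair of query/key dimensions by a position-dependent angle, so dot-product attention becomes a function of relative position. The sketch below is a minimal, standalone illustration of that rotation (a hypothetical `rope` helper written for this page, not code taken from any of the listed models):

```python
# Minimal sketch of rotary position embedding (RoPE): rotate each
# (even, odd) dimension pair of a sequence of vectors by an angle that
# grows with position. Illustrative only.
import torch

def rope(x: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
    """Apply RoPE to x of shape (seq_len, dim), with dim even."""
    seq_len, dim = x.shape
    # One rotation frequency per dimension pair, decaying geometrically.
    freqs = base ** (-torch.arange(0, dim, 2).float() / dim)
    angles = torch.arange(seq_len).float()[:, None] * freqs[None, :]
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[:, 0::2], x[:, 1::2]
    # Standard 2-D rotation applied to each (x1, x2) pair.
    out = torch.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin
    out[:, 1::2] = x1 * sin + x2 * cos
    return out

q = torch.randn(8, 64)
print(rope(q).shape)  # (8, 64): same shape, positions now encoded in phase
```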
## ModernBERT-base-ita (DeepMount00)
**License:** Apache-2.0 · **Tags:** Large Language Model, Transformers, Supports Multiple Languages · **Downloads:** 81 · **Likes:** 10

ModernBERT is a modern bidirectional encoder-only Transformer model (BERT-style), pre-trained on 2 trillion tokens of English and code data, with a native context length of up to 8,192 tokens.
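As a usage sketch, a ModernBERT-style checkpoint can be exercised with the standard transformers fill-mask pipeline. The hub id `DeepMount00/ModernBERT-base-ita` is assumed from the author and model name listed above, and the example presumes the checkpoint ships a masked-LM head:

```python
# Querying a ModernBERT-style masked-LM checkpoint via the transformers
# pipeline. The hub id below is assumed from this listing's author/name.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="DeepMount00/ModernBERT-base-ita")

# ModernBERT uses the [MASK] token; the card lists multilingual (Italian) support.
for candidate in fill_mask("La capitale d'Italia è [MASK]."):
    print(f"{candidate['token_str']!r}: {candidate['score']:.3f}")
```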
## E5-RoPE-Base (dwzhu)
**License:** MIT · **Tags:** Text Embedding, English · **Downloads:** 129 · **Likes:** 17

E5-RoPE-Base is an embedding model based on rotary position embedding (RoPE), designed to support long-context retrieval tasks.
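The sketch below shows the generic E5-family embedding recipe applied to this card: `"query: "`/`"passage: "` prefixes and mean pooling over non-padding tokens. The hub id `dwzhu/e5rope-base` is assumed from the listing, as is `trust_remote_code=True` (RoPE variants of E5 may ship custom modeling code):

```python
# Sentence embeddings with an E5-style encoder; hub id and remote-code
# flag are assumptions, not confirmed by this listing.
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

model_id = "dwzhu/e5rope-base"  # assumed hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id, trust_remote_code=True)

texts = ["query: how does rotary position embedding work?",
         "passage: RoPE rotates query/key vectors by a position-dependent angle."]
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    hidden = model(**batch).last_hidden_state  # (batch, seq, dim)

# Mean-pool over non-padding tokens, L2-normalize, then compare by cosine.
mask = batch["attention_mask"].unsqueeze(-1)
emb = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
emb = F.normalize(emb, p=2, dim=1)
print((emb[0] @ emb[1]).item())
```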
## eva02_tiny_patch14_224.mim_in22k (timm)
**License:** MIT · **Tags:** Image Classification, Transformers · **Downloads:** 385 · **Likes:** 1

EVA02 is a Vision Transformer model pre-trained on ImageNet-22k through masked image modeling, suitable for image classification and feature extraction tasks.
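Since `eva02_tiny_patch14_224.mim_in22k` follows timm's naming scheme, a feature-extraction sketch looks like the following; `num_classes=0` drops the classifier head so the model returns pooled features, which suits an MIM-pretrained backbone (assuming the checkpoint is reachable through `timm.create_model`):

```python
# Feature extraction with the timm EVA02 checkpoint listed above.
import timm
import torch

model = timm.create_model("eva02_tiny_patch14_224.mim_in22k",
                          pretrained=True, num_classes=0)
model.eval()

# Preprocessing transform matching the checkpoint's training config;
# apply it to real PIL images. A random tensor stands in below.
cfg = timm.data.resolve_model_data_config(model)
transform = timm.data.create_transform(**cfg, is_training=False)

x = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed image
with torch.no_grad():
    features = model(x)
print(features.shape)  # e.g. (1, 192) for the tiny variant
```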
## wav2vec2-conformer-rope-large-960h-ft (facebook)
**License:** Apache-2.0 · **Tags:** Speech Recognition, Transformers, English · **Downloads:** 22.02k · **Likes:** 10

This model incorporates rotary position embedding; it is pre-trained and fine-tuned on 960 hours of LibriSpeech audio sampled at 16 kHz and is suited to English speech recognition tasks.
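A transcription sketch for this card, assuming the hub id `facebook/wav2vec2-conformer-rope-large-960h-ft` from the author and model name above, using greedy CTC decoding:

```python
# Transcribing 16 kHz English audio with the Conformer + RoPE CTC model.
import numpy as np
import torch
from transformers import AutoProcessor, Wav2Vec2ConformerForCTC

model_id = "facebook/wav2vec2-conformer-rope-large-960h-ft"  # assumed hub id
processor = AutoProcessor.from_pretrained(model_id)
model = Wav2Vec2ConformerForCTC.from_pretrained(model_id)

# Input must be mono float audio at 16 kHz, matching pre-training.
waveform = np.zeros(16000, dtype=np.float32)  # one second of silence as a stand-in
inputs = processor(waveform, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding: argmax per frame, then collapse repeats and blanks.
ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(ids))
```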